Survey on Unsupervised Domain Adaptation for Semantic Segmentation for Visual Perception in Automated Driving


Abstract

Deep neural networks (DNNs) have proven their capabilities in the past years and play a significant role in environment perception for the challenging application of automated driving. They are employed for tasks such as detection, semantic segmentation, and sensor fusion. Despite tremendous research efforts, several issues still need to be addressed that limit the applicability of DNNs. The bad generalization to unseen domains is a major problem on the way to a safe, large-scale application, because manual annotation of new domains is costly, particularly for semantic segmentation. For this reason, methods are required that adapt DNNs to new domains without labeling effort. This task is termed unsupervised domain adaptation (UDA). While different domain shifts challenge DNNs, the shift between synthetic and real data is of particular importance for automated driving, as it allows the use of simulation environments for DNN training. We present an overview of the current state of the art in this field. We categorize and explain the different approaches for UDA. The number of considered publications is larger than in any other survey on this topic. We also go far beyond a mere description of the UDA state-of-the-art: we present a quantitative comparison, point out the latest trends, conduct a critical analysis of the state-of-the-art, and highlight promising future research directions. With this survey, we aim to facilitate further research and encourage scientists to exploit novel research directions.
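The abstract describes UDA only at a high level. To make the task concrete, the following is a minimal, hedged sketch of one common UDA family covered by such surveys, pseudo-label self-training for semantic segmentation. The model, optimizer, data batches, and confidence threshold are illustrative placeholders and do not reproduce any specific method from the paper.

```python
# Minimal sketch of pseudo-label self-training for UDA semantic segmentation.
# All names (model, source_batch, target_batch, conf_thresh) are illustrative
# placeholders, not the survey's method or a specific published algorithm.
import torch
import torch.nn.functional as F

def uda_self_training_step(model, optimizer, source_batch, target_batch,
                           conf_thresh=0.9, ignore_index=255):
    """One training step: supervised loss on labeled source (e.g., synthetic)
    data plus a pseudo-label loss on unlabeled target (e.g., real) data."""
    src_images, src_labels = source_batch   # labeled source domain
    tgt_images = target_batch               # unlabeled target domain

    model.train()
    optimizer.zero_grad()

    # 1) Standard supervised cross-entropy on the source domain.
    src_logits = model(src_images)          # (B, C, H, W)
    loss_src = F.cross_entropy(src_logits, src_labels, ignore_index=ignore_index)

    # 2) Pseudo-labels for the target domain: keep only confident pixels.
    with torch.no_grad():
        tgt_probs = torch.softmax(model(tgt_images), dim=1)
        conf, pseudo = tgt_probs.max(dim=1)         # (B, H, W)
        pseudo[conf < conf_thresh] = ignore_index   # mask out uncertain pixels

    # 3) Self-training loss on the confident target pixels only.
    tgt_logits = model(tgt_images)
    loss_tgt = F.cross_entropy(tgt_logits, pseudo, ignore_index=ignore_index)

    loss = loss_src + loss_tgt
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, such self-training schemes are usually combined with class-balanced thresholds, teacher-student averaging, or data augmentation; the sketch above only illustrates the core idea of training on labeled source data while bootstrapping labels on the unlabeled target domain.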


Similar Articles

Unsupervised Domain Adaptation for Semantic Segmentation with GANs

Visual Domain Adaptation is a problem of immense importance in computer vision. Previous approaches showcase the inability of even deep neural networks to learn informative representations across domain shift. This problem is more severe for tasks where acquiring hand labeled data is extremely hard and tedious. In this work, we focus on adapting the representations learned by segmentation netwo...
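The abstract above is truncated, so the exact GAN-based approach is not reproduced here. As a generic, hedged illustration of adversarial UDA for segmentation, the sketch below aligns the output space of a segmentation network with a small domain discriminator; all module names, architectures, and hyperparameters (e.g., OutputDiscriminator, lambda_adv) are assumptions for illustration only.

```python
# Hedged sketch of GAN-style adversarial alignment for UDA segmentation:
# a domain discriminator tries to tell source predictions from target
# predictions, while the segmentation network learns to fool it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputDiscriminator(nn.Module):
    """Small fully convolutional discriminator over softmax output maps."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # per-patch domain logit
        )

    def forward(self, x):
        return self.net(x)

def adversarial_step(seg_model, disc, seg_opt, disc_opt,
                     src_images, src_labels, tgt_images, lambda_adv=1e-3):
    bce = nn.BCEWithLogitsLoss()

    # 1) Update the segmentation network: supervised source loss plus an
    #    adversarial loss that makes target predictions look source-like.
    seg_opt.zero_grad()
    loss_seg = F.cross_entropy(seg_model(src_images), src_labels, ignore_index=255)
    tgt_probs = torch.softmax(seg_model(tgt_images), dim=1)
    d_out = disc(tgt_probs)
    loss_adv = bce(d_out, torch.ones_like(d_out))  # label target as "source"
    (loss_seg + lambda_adv * loss_adv).backward()
    seg_opt.step()

    # 2) Update the discriminator: source predictions -> 1, target -> 0.
    disc_opt.zero_grad()
    with torch.no_grad():
        src_probs = torch.softmax(seg_model(src_images), dim=1)
        tgt_probs = torch.softmax(seg_model(tgt_images), dim=1)
    d_src, d_tgt = disc(src_probs), disc(tgt_probs)
    loss_disc = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    loss_disc.backward()
    disc_opt.step()
    return loss_seg.item(), loss_adv.item(), loss_disc.item()
```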


Deep Unsupervised Domain Adaptation for Image Classification via Low Rank Representation Learning

Domain adaptation is a powerful technique when a large amount of labeled data with similar attributes is available in different domains. In real-world applications there is a huge amount of data, but most of it is unlabeled. Domain adaptation is effective for image classification, where obtaining adequate labeled data is expensive and time-consuming. We propose a novel method named DALRRL, which consists of deep ...


Unsupervised Domain Adaptation for Joint Segmentation and POS-Tagging

Sophisticated models have been developed for joint word segmentation and part-of-speech tagging, with increasing accuracies reported on the Chinese Treebank data. These systems, which rely on supervised learning, typically perform worse on texts from a different domain, for which little annotation is available. We consider self-training and character clustering for domain adaptation. Both metho...


Boosting for Unsupervised Domain Adaptation

To cope with machine learning problems where the learner receives data from different source and target distributions, a new learning framework named domain adaptation (DA) has emerged, opening the door for designing theoretically well-founded algorithms. In this paper, we present SLDAB, a self-labeling DA algorithm, which takes its origin from both the theory of boosting and the theory of DA. ...


Sample-oriented Domain Adaptation for Image Classification

Image processing performs operations on an image in order to obtain an enhanced image or to extract useful information from it. Conventional image processing algorithms cannot perform well in scenarios where the training images (source domain) used to learn the model have a different distribution from the test images (target domain). Also, many real-world applicat...



Journal

Journal title: IEEE Access

Year: 2023

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2023.3277785